Use Case: Classifying Wood Veneers Into Dry and Wet

#artificialintelligence

Industrial IoT is playing a significant role in Industry 4.0, especially in automating manufacturing processes. With the integration of sensors, cameras, and AI at the edge, organizations can now automate numerous processes such as visual quality-control inspection. Take, for example, the manufacturing of wood veneers. An important part of the manufacturing process involves drying the wood sheets at temperatures of up to 320 °F to bring the moisture content down to around 8% to 12%. After drying, the sheets must pass several checks to verify that they have been dried correctly and meet quality-control standards.
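
As a rough illustration of what such an automated dry/wet inspection could look like, the sketch below trains a small binary image classifier with TensorFlow/Keras. It is only a minimal sketch: the network shape, image size, and the random stand-in data are assumptions, not the deployment described in the use case.

```python
# Minimal sketch: a small CNN that classifies veneer sheet images as "dry" vs. "wet".
# The architecture, image size, and dummy data are illustrative assumptions only.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability that the sheet is "wet"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical training data: in practice these would be labelled camera images of
# veneer sheets, e.g. loaded with tf.keras.utils.image_dataset_from_directory.
x = np.random.rand(16, 128, 128, 3).astype("float32")
y = np.random.randint(0, 2, size=(16, 1))
model.fit(x, y, epochs=1)
```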


Unsupervised Video Prediction from a Single Frame by Estimating 3D Dynamic Scene Structure

Henderson, Paul, Lampert, Christoph H., Bickel, Bernd

arXiv.org Artificial Intelligence

Our goal in this work is to generate realistic videos given just one initial frame as input. Existing unsupervised approaches to this task do not consider the fact that a video typically shows a 3D environment, and that this should remain coherent from frame to frame even as the camera and objects move. We address this by developing a model that first estimates the latent 3D structure of the scene, including the segmentation of any moving objects. It then predicts future frames by simulating the object and camera dynamics, and rendering the resulting views. Importantly, it is trained end-to-end using only the unsupervised objective of predicting future frames, without any 3D information or segmentation annotations. Experiments on two challenging datasets of natural videos show that our model can estimate 3D structure and motion segmentation from a single frame, and hence generate plausible and varied predictions.


Faster than LASER -- Towards Stream Reasoning with Deep Neural Networks

Ferreira, João, Lavado, Diogo, Gonçalves, Ricardo, Knorr, Matthias, Krippahl, Ludwig, Leite, João

arXiv.org Artificial Intelligence

With the constant increase of available data in various domains, such as the Internet of Things, Social Networks or Smart Cities, it has become essential that agents are able to process and reason with such data in real time. Reasoning over time-annotated data with background knowledge may be challenging, due to the volume and velocity at which such data is produced, yet such complex reasoning is necessary in scenarios where agents need to discover potential problems that cannot be detected with simple stream processing techniques. Stream reasoners aim to bridge this gap between reasoning and stream processing, and LASER is one such stream reasoner, designed to analyse and perform complex reasoning over streams of data. It is based on LARS, a rule-based logical language extending Answer Set Programming, and it has shown better runtime results than other state-of-the-art stream reasoning systems. Nevertheless, for high levels of data throughput even LASER may be unable to compute answers in a timely fashion. In this paper, we study whether Convolutional and Recurrent Neural Networks, which have been shown to be particularly well-suited for time series forecasting and classification, can be trained to approximate reasoning with LASER, so that agents can benefit from their high processing speed.
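
To make the general idea concrete, here is a hedged sketch of training a recurrent network to imitate a stream reasoner's yes/no answers on fixed-length windows of a data stream. PyTorch is assumed, and the window length, feature count, and random labels are placeholders standing in for windows labelled offline by the exact reasoner; this is not the paper's actual architecture or encoding.

```python
# Sketch: train an LSTM to imitate a stream reasoner's answers on stream windows.
# Window shape, feature count, and labels are illustrative assumptions.
import torch
import torch.nn as nn

class StreamApproximator(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, window):              # window: (batch, time, features)
        _, (h, _) = self.lstm(window)
        return self.head(h[-1])             # logit: does the query hold on this window?

# Hypothetical training data: 1000 windows of 20 time steps with 8 features each,
# labelled by running the slow-but-exact reasoner offline.
windows = torch.randn(1000, 20, 8)
labels = torch.randint(0, 2, (1000, 1)).float()

model = StreamApproximator(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(windows), labels)
    loss.backward()
    opt.step()
```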


Nonlinear Invariant Risk Minimization: A Causal Approach

Lu, Chaochao, Wu, Yuhuai, Hernández-Lobato, José Miguel, Schölkopf, Bernhard

arXiv.org Machine Learning

Due to spurious correlations, machine learning systems often fail to generalize to environments whose distributions differ from the ones used at training time. Prior work addressing this, either explicitly or implicitly, attempted to find a data representation that has an invariant causal relationship with the target. This is done by leveraging a diverse set of training environments to reduce the effect of spurious features and build an invariant predictor. However, these methods have generalization guarantees only when both data representation and classifiers come from a linear model class. We propose Invariant Causal Representation Learning (ICRL), a learning paradigm that enables out-of-distribution (OOD) generalization in the nonlinear setting (i.e., nonlinear representations and nonlinear classifiers). It builds upon a practical and general assumption: the prior over the data representation factorizes when conditioning on the target and the environment. Based on this, we show identifiability of the data representation up to very simple transformations. We also prove that all direct causes of the target can be fully discovered, which further enables us to obtain generalization guarantees in the nonlinear setting. Extensive experiments on both synthetic and real-world datasets show that our approach significantly outperforms a variety of baseline methods. Finally, in the concluding discussion, we further explore the aforementioned assumption and propose a general view, called the Agnostic Hypothesis: there exists a set of hidden causal factors affecting both inputs and outcomes. The Agnostic Hypothesis can provide a unifying view of machine learning in terms of representation learning. More importantly, it can inspire a new direction to explore the general theory for identifying hidden causal factors, which is key to enabling the OOD generalization guarantees in machine learning.
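
The factorized-prior assumption can be illustrated with a small sketch: conditioned on the target y and the environment e, each latent dimension receives an independent Gaussian prior. The module below is only an illustration of that assumption, with hypothetical layer sizes and inputs; it is not the authors' full ICRL training procedure.

```python
# Illustration of the assumption p(z | y, e) = prod_i p(z_i | y, e):
# each latent dimension gets its own Gaussian prior conditioned on (y, e).
# Layer sizes and example inputs are hypothetical.
import torch
import torch.nn as nn

class FactorizedConditionalPrior(nn.Module):
    def __init__(self, latent_dim, n_envs):
        super().__init__()
        self.n_envs = n_envs
        # One (mean, log-variance) pair per latent dimension, computed from (y, e).
        self.net = nn.Sequential(
            nn.Linear(1 + n_envs, 64), nn.ReLU(),
            nn.Linear(64, 2 * latent_dim),
        )

    def forward(self, y, e):
        e_onehot = nn.functional.one_hot(e, self.n_envs).float()
        params = self.net(torch.cat([y.unsqueeze(-1), e_onehot], dim=-1))
        mean, log_var = params.chunk(2, dim=-1)
        # Fully factorized conditional prior over the latent dimensions.
        return torch.distributions.Normal(mean, log_var.exp().sqrt())

prior = FactorizedConditionalPrior(latent_dim=8, n_envs=3)
y = torch.tensor([0.7, 1.2])          # example targets
e = torch.tensor([0, 2])              # example environment labels
print(prior(y, e).sample().shape)     # torch.Size([2, 8])
```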


A regime switching on Covid19 analysis and prediction in Romania

Petrica, Marian, Stochitoiu, Radu D., Leordeanu, Marius, Popescu, Ionel

arXiv.org Machine Learning

In this paper we propose a regime separation for the analysis of Covid19 in Romania, combined with the SIR and SIRD mathematical models. The main regimes we study are the free spread of the virus, the quarantine with partial relaxation, and finally the relaxation regime. Our main model is the classical SIR model, but because we cannot fully trust the reported numbers of infected or recovered people, we base our analysis on the number of deceased, which is more reliable. To deal with this, we introduce a simple modification of the SIR model that accounts for the deceased separately; this in turn is the basis for fitting the parameters. The estimation of the parameters is done in two steps. The first consists in training a neural network based on SIR models to detect the regime changes. Once this is done, we fit the main parameters of the SIRD model using a grid search. Finally, we make predictions of the evolution over a one-month timeframe with the fitted parameters.
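
For readers unfamiliar with the SIRD dynamics the abstract builds on, the sketch below simulates the classical SIR model with the deceased tracked separately, using simple Euler integration. The parameter values are illustrative placeholders, not the fitted ones from the paper; a regime switch would simply correspond to changing beta at a given date.

```python
# Minimal SIRD sketch: SIR with a separate deceased compartment.
# Rates and population are illustrative placeholders, not fitted values.
import numpy as np

def simulate_sird(beta, gamma, mu, N, I0, days, dt=0.1):
    """Euler integration of:
    dS = -beta*S*I/N,  dI = beta*S*I/N - (gamma + mu)*I,  dR = gamma*I,  dD = mu*I."""
    S, I, R, D = N - I0, float(I0), 0.0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N
        dS = -new_inf
        dI = new_inf - (gamma + mu) * I
        dR = gamma * I
        dD = mu * I
        S, I, R, D = S + dt * dS, I + dt * dI, R + dt * dR, D + dt * dD
        history.append((S, I, R, D))
    return np.array(history)

# Example run for a single regime with placeholder rates.
traj = simulate_sird(beta=0.25, gamma=0.1, mu=0.01, N=19_000_000, I0=100, days=120)
print("deceased after 120 days:", round(traj[-1, 3]))
```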


Unsupervised object-centric video generation and decomposition in 3D

Henderson, Paul, Lampert, Christoph H.

arXiv.org Machine Learning

A natural approach to generative modeling of videos is to represent them as a composition of moving objects. Recent works model a set of 2D sprites over a slowly varying background, but without considering the underlying 3D scene that gives rise to them. We instead propose to model a video as the view seen while moving through a scene with multiple 3D objects and a 3D background. Our model is trained from monocular videos without any supervision, yet learns to generate coherent 3D scenes containing several moving objects. We conduct detailed experiments on two datasets, going beyond the visual complexity supported by state-of-the-art generative approaches. We evaluate our method on depth prediction and 3D object detection (tasks which cannot be addressed by those earlier works) and show it outperforms them even on 2D instance segmentation and tracking.


Towards a Robust Parameterization for Conditioning Facies Models Using Deep Variational Autoencoders and Ensemble Smoother

Canchumuni, Smith W. A., Emerick, Alexandre A., Pacheco, Marco Aurélio C.

arXiv.org Machine Learning

The literature about history matching is vast and, despite the impressive number of methods proposed and the significant progress reported in the last decade, conditioning reservoir models to dynamic data is still a challenging task. Ensemble-based methods are among the most successful and efficient techniques currently available for history matching. These methods are usually able to achieve reasonable data matches, especially if an iterative formulation is employed. However, they sometimes fail to preserve the geological realism of the model, which is particularly evident in reservoirs with complex facies distributions. This occurs mainly because of the Gaussian assumptions inherent in these methods. This fact has encouraged an intense research activity to develop parameterizations for facies history matching. Despite the large number of publications, the development of robust parameterizations for facies remains an open problem. Deep learning techniques have been delivering impressive results in a number of different areas, and the first applications in data assimilation in geoscience have started to appear in the literature. The present paper reports the current results of our investigations on the use of deep neural networks towards the construction of a continuous parameterization of facies which can be used for data assimilation with ensemble methods. Specifically, we use a convolutional variational autoencoder and the ensemble smoother with multiple data assimilation. We tested the parameterization in three synthetic history-matching problems with channelized facies. We focus on this type of facies because they are among the most challenging to preserve after the assimilation of data. The parameterization showed promising results, outperforming previous methods and generating well-defined channelized facies.
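
As a rough sketch of the kind of convolutional VAE that gives facies models a continuous parameterization, the code below defines a small encoder/decoder pair. Image size, channel counts, and latent dimension are illustrative assumptions, and the ensemble-smoother update itself is not shown; this is not the authors' exact architecture.

```python
# Sketch of a small convolutional VAE for a continuous facies parameterization.
# Sizes are placeholders; the ensemble smoother would perturb the latent z.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.to_logvar = nn.Linear(32 * 16 * 16, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, x):                       # x: (batch, 1, 64, 64) facies images
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.decoder(z), mu, logvar

# Data assimilation then updates the continuous latent z instead of the discrete
# facies image, and the decoder maps the updated ensemble back to facies.
vae = ConvVAE()
recon, mu, logvar = vae(torch.rand(4, 1, 64, 64))
print(recon.shape)   # torch.Size([4, 1, 64, 64])
```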


Subtask Gated Networks for Non-Intrusive Load Monitoring

Shin, Changho, Joo, Sunghwan, Yim, Jaeryun, Lee, Hyoseop, Moon, Taesup, Rhee, Wonjong

arXiv.org Machine Learning

Non-intrusive load monitoring (NILM), also known as energy disaggregation, is a blind source separation problem where a household's aggregate electricity consumption is broken down into the electricity usage of individual appliances. In this way, the cost and trouble of installing many measurement devices over numerous household appliances can be avoided, and only one device needs to be installed. The problem has been well known since Hart's seminal paper in 1992, and recently significant performance improvements have been achieved by adopting deep networks. In this work, we focus on the idea that appliances have on/off states, and develop a deep network for further performance improvements. Specifically, we propose a subtask gated network that combines the main regression network with an on/off classification subtask network. Unlike typical multitask learning algorithms, where multiple tasks simply share the network parameters to take advantage of the relevance among tasks, the subtask gated network multiplies the main network's regression output with the subtask's classification probability. When standby power is additionally learned, the proposed solution surpasses the state-of-the-art performance for most of the benchmark cases. The subtask gated network can be very effective for any problem that inherently has on/off states.
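
The gating mechanism the abstract describes, multiplying the regression output by the on/off classification probability, can be sketched in a few lines. The shared encoder, window length, and layer sizes below are hypothetical placeholders, not the paper's actual network.

```python
# Minimal sketch of the subtask gating idea: gated power estimate = regression output
# times the on/off probability. Encoder and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SubtaskGatedNet(nn.Module):
    def __init__(self, window_len=99, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(window_len, hidden), nn.ReLU())
        self.regression_head = nn.Linear(hidden, 1)   # appliance power estimate
        self.onoff_head = nn.Linear(hidden, 1)        # on/off logit

    def forward(self, aggregate_window):   # (batch, window_len) of mains readings
        h = self.shared(aggregate_window)
        power = torch.relu(self.regression_head(h))   # non-negative power estimate
        p_on = torch.sigmoid(self.onoff_head(h))      # probability the appliance is on
        return power * p_on, p_on   # gated estimate, plus p_on for the subtask loss

model = SubtaskGatedNet()
est_power, p_on = model(torch.rand(8, 99))
print(est_power.shape, p_on.shape)   # torch.Size([8, 1]) torch.Size([8, 1])
```

The regression head would be trained with a reconstruction loss on the gated output and the classification head with a cross-entropy loss on the on/off labels, so the gate suppresses spurious power predictions when the appliance is off.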